11 research outputs found

    Supporting Teaching Staff: A Phenomenological Study Of The Innovation Readiness Of Teacher Support Staff

    Educational institutions that want to successfully innovate regarding the education they provide must synchronise organisational growth with educational growth. To support such innovation, a maturity model can help identify successful teaching and learning practices by encouraging experimentation, collaboration and alignment with strategic goals. Although maturity models that support staff in the process of innovating education are valuable, they are scarce. This phenomenological study explored the views of staff from the Centre for Expertise in Learning and Teaching (CELT) on readiness for innovation at the University of Twente (UT). We surveyed staff members who were actively involved in projects or teacher initiatives aimed at educational innovation. The questionnaire consisted of 137 closed-ended multiple-choice questions (e.g. ‘Is teaching support guided by the latest research findings?’) with answers on a five-point scale (‘Not’, ‘Partly’, ‘Largely’, ‘Fully’ and ‘Don’t know’). The survey’s structure was based on that of the maturity model. The questions were divided into five categories of processes: learning (directly affecting pedagogy), development (related to the creation and maintenance of resources), support (related to support and operational management), evaluation (related to evaluation and quality control throughout its lifecycle) and organisation (related to institutional planning and management). After the survey results were analysed, respondents were invited to reflect on the outcomes, share their insights and suggest possible explanations for the results. In this paper, we present the educational support staff’s maturity model results and discuss how these results can influence the effects of teachers’ innovative practices.

    Assessment of competency development in a challenge-based learning course: can coaches be objective assessors?

    Higher education institutions aim to incorporate competency development into their engineering curricula, which can help engineering students become independent critical thinkers with entrepreneurial mindsets. However, no solid methods exist to evaluate the acquisition of these competencies. The objectivity of such assessments is often ensured by distinguishing between who supervises a student group and who grades its project. The assessor’s active involvement in the learning process is essential for assessing competency development during that process, but such involvement may lead to assessor bias. This study aims to investigate whether and under what conditions coaches can be objective assessors. An intraclass correlation coefficient (ICC) was used to measure the level of agreement between assessors and coaches when using the same rubric to assess students’ deliverables. Four assessors and seven coaches from the University of Twente assessed 24 students’ individual learning processes based on individual reflection deliverables. The coaches assessed the students they supervised during a challenge-based learning (CBL) course, while the assessors, who did not participate in the learning process, were assigned randomly to students. The means were compared using SPSS, which indicated, among other things, that coaches generally awarded higher scores than assessors. This may indicate that coaches are biased because of their involvement in the learning process. Despite this, the results also indicate that coach assessment was in line with that of assessors when the coach was an appointed and experienced examiner.
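
    The abstract does not state which ICC form was used; as a minimal illustrative sketch, the following computes ICC(2,1) (two-way random effects, absolute agreement, single rater, after Shrout and Fleiss) from a subjects-by-raters score matrix. The function name and the choice of variant are assumptions for illustration, not the study’s actual analysis (which was run in SPSS).

    ```python
    import numpy as np

    def icc2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        ratings: array-like of shape (n_subjects, k_raters), each cell one score.
        """
        Y = np.asarray(ratings, dtype=float)
        n, k = Y.shape
        grand = Y.mean()
        row_means = Y.mean(axis=1)   # per-subject means
        col_means = Y.mean(axis=0)   # per-rater means
        # Partition total sum of squares into subject, rater, and residual parts.
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
        msr = ss_rows / (n - 1)              # between-subjects mean square
        msc = ss_cols / (k - 1)              # between-raters mean square
        mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    ```

    With identical ratings from both raters (e.g. `[[1, 1], [2, 2], [3, 3]]`) the function returns 1.0; a constant offset between raters lowers it, since the absolute-agreement form penalises systematic leniency of the kind the study observed in coaches.
    
    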
